Pre-Trained Models

To help you get started quickly with Dragonfly’s deep learning solutions, you can download a selection of deep models pre-trained by the Dragonfly Team. Starting with a pre-trained model often yields better and faster results with smaller training sets than starting from an untrained model. In the example below, the progress of training a pre-trained and an untrained deep model is shown as a function of increasingly larger training sets of 2, 4, 8, 16, and 32 labeled patches.

Validation Dice coefficients for pre-trained (in orange) versus untrained (in blue) U-Net dl-5 ifc-64 models using increasingly larger training sets
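A Dice coefficient near 1 indicates close agreement between a predicted segmentation and its ground truth, which is why it is used as the validation metric in the plot above. The following Python sketch, which is illustrative and not part of Dragonfly’s API, shows how the metric is computed for a pair of binary masks:

```python
import numpy as np

def dice_coefficient(prediction: np.ndarray, ground_truth: np.ndarray,
                     eps: float = 1e-7) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks A and B."""
    prediction = prediction.astype(bool)
    ground_truth = ground_truth.astype(bool)
    intersection = np.logical_and(prediction, ground_truth).sum()
    # eps guards against division by zero when both masks are empty.
    return (2.0 * intersection + eps) / (prediction.sum() + ground_truth.sum() + eps)

# Example: two 2x2 masks that agree on one foreground pixel out of two each.
pred = np.array([[1, 0], [1, 0]])
truth = np.array([[1, 0], [0, 1]])
print(f"Dice: {dice_coefficient(pred, truth):.3f}")  # -> Dice: 0.500
```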

Note that the objective when training a neural network is to find appropriate weights for the network through multiple forward and backward passes. When you use a pre-trained model for a semantic segmentation task, you take advantage of its previously learned feature maps, which reduces the laborious and time-consuming labeling of large training sets.
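The sketch below illustrates this general transfer-learning pattern as described in the TensorFlow tutorial referenced below: load an encoder with previously learned weights, freeze it, and train only a small task-specific head on the new labels. It uses MobileNetV2 as a stand-in encoder for brevity; Dragonfly’s own pre-trained U-Net models are downloaded and managed through the Dragonfly interface, not through code like this.

```python
import tensorflow as tf

# Load an encoder whose weights were learned on a large dataset (ImageNet).
base = tf.keras.applications.MobileNetV2(
    input_shape=(128, 128, 3), include_top=False, weights="imagenet")
base.trainable = False  # freeze the learned feature maps

# Attach a small task-specific head; only its weights are trained.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy", metrics=["accuracy"])

# With the encoder frozen, even a small labeled set can be enough:
# model.fit(small_labeled_dataset, epochs=10)
```

Because only the head is trained while the feature maps stay fixed, far fewer labeled examples are needed than when training every weight from scratch, which is the behavior shown by the orange curve above.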

Refer to the TensorFlow tutorial Transfer learning and fine-tuning (available at www.tensorflow.org/tutorials/images/transfer_learning) for additional information about working with pre-trained models.